Enhancing Road Safety Through Multi-Camera Image Segmentation with Post-Encroachment Time Analysis

Chaudhuri, Shounak Ray, Jahangiri, Arash, Paolini, Christopher

arXiv.org Artificial Intelligence

Abstract--Traffic safety analysis at signalized intersections is vital for reducing vehicle and pedestrian collisions, yet traditional crash-based studies are limited by data sparsity and latency. This paper presents a novel multi-camera computer vision framework for real-time safety assessment through Post-Encroachment Time (PET) computation, demonstrated at the intersection of H Street and Broadway in Chula Vista, California. Four synchronized cameras provide continuous visual coverage, with each frame processed on NVIDIA Jetson AGX Xavier devices using YOLOv11 segmentation for vehicle detection. Detected vehicle polygons are transformed into a unified bird's-eye map using homography matrices, enabling alignment across overlapping camera views. A novel pixel-level PET algorithm measures vehicle position without reliance on fixed cells, allowing fine-grained hazard visualization via dynamic heatmaps, accurate to 3.3 sq-cm. Timestamped vehicle and PET data are stored in an SQL database for long-term monitoring. Results over various time intervals demonstrate the framework's ability to identify high-risk regions with sub-second precision and real-time throughput on edge devices, producing data for an 800×800-pixel logarithmic heatmap at an average of 2.68 FPS.

A. Context and Motivation

Traffic safety at signalized intersections remains a critical concern in urban planning, as intersections present challenges of high vehicle conflict and elevated accident risk. Large and open intersections, in particular, present challenges due to increased vehicle maneuvering space, multiple conflict points, and reduced natural traffic control, which leads to higher speeds and greater uncertainty in driver behavior.
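The two core steps the abstract describes — warping detected polygons into a shared bird's-eye map via a homography, and updating a per-pixel PET record as vehicles occupy map pixels — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the homography matrix, map size, and the simplification of using each pixel's last-occupancy timestamp (rather than a precise exit time) are assumptions for the sketch.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of image coordinates,
    mapping them into the unified bird's-eye frame."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # perspective divide

class PixelPET:
    """Track post-encroachment time per map pixel, with no fixed cells:
    PET at a pixel is the gap between one vehicle last occupying it and
    a different vehicle arriving at it."""
    def __init__(self, size=800):
        self.last_seen = {}                        # pixel -> (vehicle_id, timestamp)
        self.pet = np.full((size, size), np.inf)   # min observed PET per pixel

    def update(self, vehicle_id, pixels, t):
        for (x, y) in pixels:
            prev = self.last_seen.get((x, y))
            if prev is not None and prev[0] != vehicle_id:
                # a different vehicle occupied this pixel earlier
                self.pet[y, x] = min(self.pet[y, x], t - prev[1])
            self.last_seen[(x, y)] = (vehicle_id, t)
```

A heatmap like the one described could then be rendered by taking the logarithm of the finite entries of `pet`; pixels with small PET values mark near-miss hotspots.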


Vision-based Navigation of Unmanned Aerial Vehicles in Orchards: An Imitation Learning Approach

Wei, Peng, Ragbir, Prabhash, Vougioukas, Stavros G., Kong, Zhaodan

arXiv.org Artificial Intelligence

Autonomous unmanned aerial vehicle (UAV) navigation in orchards presents significant challenges due to obstacles and GPS-deprived environments. In this work, we introduce a learning-based approach to achieve vision-based navigation of UAVs within orchard rows. Our method employs a variational autoencoder (VAE)-based controller, trained with an intervention-based learning framework that allows the UAV to learn a visuomotor policy from human experience. Field experiments demonstrate that after only a few iterations of training, the proposed VAE-based controller can autonomously navigate the UAV based on a front-mounted camera stream. The controller exhibits strong obstacle avoidance performance, achieves longer flying distances with less human assistance, and outperforms existing algorithms. Furthermore, we show that the policy generalizes effectively to novel environments and maintains competitive performance across varying conditions and speeds. This research not only advances UAV autonomy but also holds significant potential for precision agriculture, improving efficiency in orchard monitoring and management. Introduction Unmanned aerial vehicle (UAV) technology has made significant progress in recent years, particularly for applications in agriculture. The ability to navigate within orchard rows allows UAVs to perform tasks such as crop inspection and yield estimation (Zhang et al., 2021). This capability provides a valuable tool for remote sensing and precision agriculture (Chen et al., 2022), leading to more efficient and improved orchard management. However, most existing UAVs still depend on GPS for navigation in agricultural settings. This reliance limits their ability to operate in confined orchard rows, where dense tree canopies can block GPS signals. Additionally, in environments with unknown obstacles, such as tree branches in orchard rows, human pilots are frequently queried to provide avoidance maneuvers, which significantly increases their workload. 
The ability to navigate autonomously and safely in orchard scenes with weak GPS signals and obstacles presents several challenges and largely hinders the deployment of UAVs in orchard operations. When the GPS signal is attenuated, the UAV must rely on exteroceptive sensors to sense the environment and navigate. Advanced techniques for enabling autonomous UAV operation without GPS fall into two categories: 1) lidar-based and 2) camera-based approaches.
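The intervention-based learning framework the abstract mentions can be sketched as a DAgger-style data-collection loop: the learned policy flies the UAV, and whenever the human pilot intervenes, the human's command is both executed and recorded as a corrective training label. This is a simplified illustration under assumed interfaces (`policy`, `human`, `env_step` are hypothetical callables), not the paper's actual VAE-based pipeline, which learns the policy from camera images.

```python
def intervention_rollout(policy, env_step, human, obs0, horizon=100):
    """Roll out the learned policy; log (observation, human action) pairs
    whenever the human intervenes. Returns the corrective dataset used to
    retrain the policy in the next iteration."""
    dataset, obs = [], obs0
    for _ in range(horizon):
        a_policy = policy(obs)
        a_human = human(obs)                        # None when no intervention
        action = a_human if a_human is not None else a_policy
        if a_human is not None:
            dataset.append((obs, a_human))          # corrective label
        obs = env_step(obs, action)
    return dataset
```

Iterating this loop concentrates training data exactly on the states where the current policy still needs human assistance, which matches the abstract's observation that assistance decreases after a few training iterations.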


Police tech can sidestep facial recognition bans now

MIT Technology Review

Companies like Flock and Axon sell suites of sensors--cameras, license plate readers, gunshot detectors, drones--and then offer AI tools to make sense of that ocean of data (at last year's conference I saw schmoozing between countless AI-for-police startups and the chiefs they sell to on the expo floor). Departments say these technologies save time, ease officer shortages, and help cut down on response times. Those sound like fine goals, but this pace of adoption raises an obvious question: Who makes the rules here? When does the use of AI cross over from efficiency into surveillance, and what type of transparency is owed to the public? In some cases, AI-powered police tech is already driving a wedge between departments and the communities they serve.


What's next for drones

MIT Technology Review

These developments raise a number of questions: Are drones safe enough to be flown in dense neighborhoods and cities? Is it a violation of people's privacy for police to fly drones overhead at an event or protest? Who decides what level of drone autonomy is acceptable in a war zone? Those questions are no longer hypothetical. Advancements in drone technology and sensors, falling prices, and easing regulations are making drones cheaper, faster, and more capable than ever.


More blue cities using drones for some 911 calls, expert says: 'They can't get cops'

FOX News

Quick, efficient and with a bird's eye view of any scene, more police departments are embracing the use of drones to carry out law enforcement work, with some blue cities now even using them to respond to 911 calls. Around 1,500 police departments across the country are currently using drones in some form, according to a report by the Electronic Frontier Foundation, a digital privacy group, with agencies deploying the technology for crowd control, searches for missing people, tracking fleeing suspects and mapping crime scenes. Steep budget cuts and dwindling staff numbers in blue cities, in particular, make drones both an effective and cost-saving tool for police in Democratic strongholds. A law enforcement official sets up a drone during a manhunt for suspect Robert Card following a mass shooting on Oct. 27, 2023, in Monmouth, Maine. Today's police drones are much bigger than regular drones commonly used for recreational purposes, with much longer battery lives and features such as thermal sensors, loudspeakers, spotlights or beacons.


The Download: police drones, and the Supreme Court's web cases

MIT Technology Review

In the skies above Chula Vista, California, where the police department runs a drone program 10 hours a day, seven days a week, it's not uncommon to see an unmanned aerial vehicle darting across the sky. Chula Vista is one of a dozen departments in the US that operate what are called drone-as-first-responder programs, where drones are dispatched by pilots, who are listening to live 911 calls, and often arrive first at the scenes of accidents, emergencies, and crimes, cameras in tow. But many argue that police forces' adoption of drones is happening too quickly. The use of drones as surveillance tools and first responders is a fundamental shift in policing, one without a well-informed public debate around privacy regulations, tactics, and limits. There's also little evidence available of its efficacy, with scant proof that drone policing reduces crime.


Meet Flippy, Sippy and Chippy: These robots can cook fries, pour drinks and make tortilla chips

Daily Mail - Science & tech

Whether it's creating perfectly cooked fries and burgers or pouring soda without any spills, robot chefs are venturing further into the $296 billion U.S. fast food industry amid a nationwide labor shortage. Miso Robotics, a California-based company, built a kitchen bot called Flippy that was able to cook 300 burgers per day and then expanded into whipping up fries with the second version. The fast-casual chain Wing Zone in May inked a deal with Miso to install Flippy 2 into all future restaurant locations. Jack in the Box is deploying that same machine along with the company's Sippy bot - which quickly pours, labels and seals beverage orders - this year with a goal of getting into 10 high-volume locations in 2023. And Miso has another machine called Chippy that can cook up Chipotle's tortilla chips - which will be integrated into a southern California location of the Mexican restaurant this year.


Children aren't as good at recognizing masked faces as adults, study finds

FOX News

Dr. Tom Frieden weighs in on what's next after the Omicron variant. Children have a more difficult time recognizing masked faces than adults do, which could harm their ability to "navigate through social interactions with their peers and teachers," according to a newly released study by Erez Freud, a researcher at York University, who published his findings on Monday in the journal Cognitive Research: Principles & Implications. Freud, along with two professors from Israel's Ben-Gurion University, gave 72 children between the ages of 6 and 14 the Cambridge Face Memory Test, which measures facial perception abilities by presenting faces with and without masks, both upright and inverted. Including masks in the presentation led to a "profound deficit in face perception abilities" that was "more pronounced in children compared to adults," according to the study.


Race to find a cure

FOX News

If Zoe Dewaghe wants ice cream for breakfast, she gets ice cream for breakfast. There's a different set of rules for her younger brother, Zach: He gets oatmeal instead. That's because five-year-old Zoe Dewaghe has a rare genetic disease called Sanfilippo syndrome. She'll gradually lose the ability to speak, to move, to recognize her surroundings. Most patients don't live into adulthood. "Once we found out what was wrong with her, we were like, 'You can eat whatever the heck you want,'" Zoe's mother, Liz, said ruefully. "Because pretty soon, you won't be able to eat." There's no approved treatment for Sanfilippo syndrome.


Emotional Context in Imitation-Based Learning in Multi-Agent Societies

Trajkovski, Goran (United States University) | Sibley, Benjamin (University of Wisconsin-Milwaukee)

AAAI Conferences

In this paper we explain how IETAL agents learn their environment and build an intrinsic, internal representation of it, which they then use to form expectations when on a quest to satisfy their active drives. As environments change (with or without other agents present in them), the agents learn new associations and "forget" irrelevant, "old" ones. We discuss the concept of the emotional context of associations, and show a gallery of simulations of behaviors in small multi-agent societies.